https://www.techspot.com/news/110441-oracle-credit-risk-hits-three-year-high-ai.html
A key measure of credit risk linked to Oracle has climbed to its highest level in three years, and Wall Street analysts warn that pressure is likely to intensify next year unless the company does more to explain how it will fund its artificial intelligence expansion. The shift reflects mounting anxiety over the scale, structure, and timing of Oracle's borrowing as it races to add data center capacity for AI workloads.
Morgan Stanley credit analysts Lindsay Tyler and David Hamburger describe several fault lines: a growing gap between spending and available funding, a balance sheet that continues to swell, and the possibility that assets built for today's AI architectures could become outdated faster than expected. They argue that these issues are now being priced directly into the cost of default protection on Oracle's debt.
The cost of five-year credit default swaps on Oracle rose to 1.25 percentage points a year in late November, according to ICE Data Services, marking the highest level since 2022. That means buyers of protection are paying 125 basis points annually to insure against a default over five years, a sharp step-up from earlier in the AI cycle. The swaps are now close enough to crisis-era territory that analysts are openly discussing whether the all-time peak set in 2008 could be challenged.
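To make the basis-point arithmetic concrete, here is a minimal sketch in Python; the $10 million notional is an assumed figure, and real CDS contracts involve upfront payments and accrual conventions the sketch ignores:

    # Illustrative only: annual cost of CDS protection at a quoted spread.
    # A spread in basis points is a fraction of the insured notional:
    # 125 bp = 1.25% of the notional per year.

    def annual_cds_cost(notional_usd: float, spread_bp: float) -> float:
        """Annual premium to insure notional_usd of debt at spread_bp."""
        return notional_usd * spread_bp / 10_000

    # On an assumed $10 million of Oracle debt, 125 bp costs $125,000 a year,
    # versus $198,000 a year at the 2008 record spread of 198 bp.
    print(annual_cds_cost(10_000_000, 125))  # 125000.0
    print(annual_cds_cost(10_000_000, 198))  # 198000.0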
In their note, Tyler and Hamburger say Oracle's five-year CDS spread could break above 1.5 percentage points in the near term and might move toward two percentage points if the company continues to provide only limited detail about its financing strategy as 2026 approaches.
For reference, Oracle's default swaps hit a record 1.98 percentage points during the 2008 financial crisis. Oracle declined to comment on the assessment or the recent trading in its credit protection.
Oracle has become one of the main corporate symbols of AI-related credit risk because it relies heavily on debt markets to support its infrastructure plans. In September, the company raised $18 billion in the US investment-grade bond market, adding a large new slug of conventional corporate debt to its capital structure.
Weeks later, a syndicate of roughly 20 banks arranged an additional $18 billion project finance loan to build a major data center campus in New Mexico, with Oracle set to step in as the tenant once the facilities are completed.
On top of that, banks are assembling a separate $38 billion loan package to back the construction of data centers in Texas and Wisconsin being developed by Vantage Data Centers, where Oracle is expected to be the anchor tenant.
Over the past two months, Tyler and Hamburger say it has become clearer that these construction loans, rather than Oracle's traditional bond financing alone, are a major driver of hedging flows. The analysts also note that some of this hedging might unwind if and when the originating banks distribute pieces of the loans to other investors, though they stress that new holders may then choose to hedge as well.
The result is an environment in which both banks and bondholders use Oracle's CDS as a flexible tool to manage exposure to AI-linked credit risk. Morgan Stanley previously argued that near-term credit deterioration and uncertainty would fuel further hedging among traditional bond investors, direct lenders, and "thematic" players who want a macro way to trade the AI spending boom.
In their latest comments, the analysts say both the bondholder and thematic hedging dynamics could become more critical over time as the market internalizes the scale of Oracle's commitments.
This has had visible consequences in performance metrics. Oracle's CDS spreads have widened more than those of the broader investment-grade CDX index, indicating that investors are demanding a higher premium to insure Oracle than for the average high-grade borrower. At the same time, Oracle's cash bonds have lagged the Bloomberg high-grade corporate index, reflecting weaker demand for its debt amid hedging activity and heightened concerns about leverage.
The Morgan Stanley team has also changed its recommended trading stance. Earlier this year, the bank had favored a "basis trade" that involved buying Oracle bonds and simultaneously buying CDS protection, on the view that the derivative spreads would widen more than the underlying bond spreads.
Now, the analysts are abandoning the bond leg of that strategy. They maintain that an outright CDS trade is cleaner in the current environment and more likely to benefit from further spread widening if concerns about Oracle's funding plans, balance sheet trajectory, and AI spending persist. For investors, that recommendation underscores how a company at the center of the AI race has also become a preferred vehicle for expressing caution about the financial risks of that race.
Hacker Dave Cross has written a short blog post about how Perl's early success sowed the seeds of its downfall or, as he puts it, made it a victim of Dotcom Survivor Syndrome. From the 90s through the 00s, Perl was not just part of the WWW but in many ways instrumental in actually creating the WWW as we knew it in its prime. Perl and the community around it have improved a lot in the last 25 years, even if the versioning might disguise that fact.
To understand the long shadow Perl casts, you have to understand the speed and pressure of the dot-com boom.
We weren't just building websites.
We were inventing how to build websites.

Best practices? Mostly unwritten.
Frameworks? Few existed.
Code reviews? Uncommon.
Continuous integration? Still a dream.

The pace was frantic. You built something overnight, demoed it in the morning, and deployed it that afternoon. And Perl let you do that.
But that same flexibility—its greatest strength—became its greatest weakness in that environment. With deadlines looming and scalability an afterthought, we ended up with:
Thousands of lines of unstructured CGI scripts
Minimal documentation
Global variables everywhere
Inline HTML mixed with business logic
Security holes you could drive a truck through

When the crash came, these codebases didn't age gracefully. The people who inherited them, often the same people who now run engineering orgs, remember Perl not as a powerful tool, but as the source of late-night chaos and technical debt.
[...]
It did not help that there has been every appearance of an ongoing M$ whisper campaign maligning Perl since the 00s. For text processing, there is still nothing better. And, as has been pointed out countless times already, the WWW is text (i.e. XML and co).
Previously:
(2020) Announcing Perl 7
(2019) Perl Is Still The Goddess For Text Manipulation
(2017) Perl, the Glue That Holds the Internet (and SN) Together, Turns 30 This Year
It's a critical time for companies competing to develop a commercial successor to the International Space Station. NASA is working with several companies, including Axiom Space, Voyager Technologies, Blue Origin, and Vast, to develop concepts for private stations where it can lease time for its astronauts.
The space agency awarded Phase One contracts several years ago and is now in the final stages of writing requirements for Phase Two after asking for feedback from industry partners in September. This program is known as Commercial LEO Destinations, or CLDs in industry parlance.
Time is running out for NASA to establish continuity between the International Space Station, which will reach its end of life in 2030, and a follow-on station ready to fly before then.
One of the more intriguing companies in the competition is Voyager Technologies, which recently announced a strategic investment from Janus Henderson, a global investment firm. In another sign that the competition is heating up, Voyager also just hired John Baum away from Vast, where he was the company's business development leader.
To get a sense of this competition and how Voyager is coming along with its Starlab space station project, Ars spoke with the firm's chairman, Dylan Taylor. This conversation has been lightly edited for clarity.
Ars: I know a lot of the companies working on CLDs are actively fundraising right now. How is this coming along for Voyager and Starlab?
Dylan Taylor: Fundraising is going quite well. You saw the Janus announcement. That's significant for a few reasons. One is, it's a significant investment. Of course, we're not disclosing exactly how much. (Editor's note: It likely is on the order of $100 million.) But the more positive development on the Janus investment is that they are such a well-known, well-respected financial investor.
If you look at the kind of bellwether investors, Janus would be up there with a Blackstone or BlackRock or Fidelity. So it's significant not only in terms of capital contribution, but in... showing that commercial space stations are investable. This isn't money coming from the Gulf States. It's not a syndication of a bunch of $1,000 checks from retail investors. This is a very significant institutional investor coming in, and it's a signal to the market. They did significant diligence on all our competitors, and they went out of their way to say that we're far and away the best business plan, best design, and everything else, so that's why it's so meaningful.
Ars: How much funding do you need to raise to complete Starlab?
Dylan Taylor: We currently estimate the cost to design, manufacture, and launch Starlab to be approximately $2.8 to $3.3 billion. And then if you look at what's anticipated in Phase Two in the NASA services contracts, it's about a $700 million capital plug that we need to raise in the market, and we're well on our way on that. We're not going to raise all of that now because obviously, after we win Phase Two, there will be a significant markup in valuation, and we'll have the ability to raise additional capital at that time. So we're only raising what we need at this stage of the project.
Ars: How are you coming as far as progress on your initial contract with NASA?
Dylan Taylor: We have our CDR (critical design review) coming up. It's December 15 to 18. We have achieved 27 milestones. We have four milestones left on our CLD Phase One contract.
Ars: You've changed your partners on the project a little bit. Where are you now on that?
Dylan Taylor: We moved construction of the structure from Bremen, Germany, to Louisiana. That will be constructed by Vivace. So the structure will be made in the US. We have a significant presence, as you know, in Houston. We'll have it in Louisiana. And we just added Leidos to the team, so there'll be a big Huntsville component to our test and integration as well. So the key partners right now in terms of equity ownership and the joint venture are ourselves, Airbus, Mitsubishi, Palantir, Space Applications Services, and MDA. And then additional partners who are on the team that aren't equity holders include Northrop, Leidos, and Hilton Hotels.
Ars: What is your current timeline for development?
Dylan Taylor: We're still on 2029. I don't anticipate that pushing out for any reason in the near term. Obviously, if we had a significant delay on Phase Two selection, that could impact things. You know, some people think that we have Starship risk. In my view, I'm highly confident Starship will be ready to go when we're ready to launch. If it's not, based on the New Glenn upgrades that were recently announced, if they're successful in implementing those, then theoretically New Glenn could also launch us. As you know, we've got a launch agreement with SpaceX on Starship, so that's still the plan.
Ars: I would not consider a 2029 Starship launch date a major risk.
Dylan Taylor: Yeah, exactly. I'm not concerned about it. But there are people who are concerned. They bring it up a lot. Now, that being said, not to pick on the other players, but my understanding is Axiom has to launch on Falcon Heavy. I'm not sure SpaceX is that excited to do a Falcon Heavy launch, so in my mind, that could be a potential risk for them. Maybe, I don't know.
Ars: What was your reaction to the directive that came out in August from NASA interim administrator Sean Duffy on commercial space stations?
Dylan Taylor: I was surprised at the fact that they appeared to be backing off the requirements a bit. You know, I don't know where it (the Phase Two Request for Proposals from NASA) ends up. That's anybody's guess. But if I were to bet, I would think it would be more similar to the original procurement strategy than the memo. But we won't know until it comes out.
Ars: Obviously, there is still an interim administrator at NASA. We had a government shutdown for a month. What's your current understanding of the timeline for the Phase Two process?
Dylan Taylor: The last information we have is that they still expected to send the RFP out by the end of the year, and then have Phase Two selection sometime late Q1, early Q2 next year. That information was mostly communicated prior to the government shutdown. So I think with the government shutdown—I'm guessing here because I don't know—but I think you probably roll forward 45 days or so. If that's the case, we're probably looking at an RFP in January and a selection probably in June or July. That's our best estimate based upon what we have been told.
Ars: We're now under five years from the International Space Station coming down. There's still a lot of work to be done for replacement. I think it's clear there are some challenges for this program, not speaking specifically about Starlab but just the general idea of commercial space stations. What advice would you have for Jared Isaacman to help make sure the CLD program is a success for NASA and the country?
Dylan Taylor: I know Jared, and I'm very optimistic. He's very, very smart, a very capable person. He's pro-commercial space. Based on his testimony and just what I know about him, he believes that commercial solutions are often better than government solutions. So I'm very optimistic he's going to be a transformational administrator. I think it's very good for the industry. I think the advice I would have for him on this program would be the same advice I'd have for him on all programs. And it's just simply clarity—clarity of mission, clarity of requirements, clarity of timeline, and the market will figure it out from there.
And specifically on CLDs, I think it's important they make a selection sooner rather than later. In my view, that selection should not just be a Space Act Agreement. It should be tied to a services commitment on the backside as well. I think that's important to signal who the chosen commercial space station successors are, whether there's two or three. I don't think there will be one. There shouldn't be one.
Ars: Has the government committed enough funding to make the program a success?
Dylan Taylor: I think this is where I might deviate from our competitors a bit. I think the answer is yes. I mean, if we have a reasonable amount of capital allocated in Phase Two and service contract commitments, the rest of the capital markets will be there. We demonstrated this with Janus and our IPO, frankly. Separately, we raised $430 million on a convertible note for Voyager, in 48 hours, two weeks ago, at an interest rate of 0.75 percent. The capital is there for well-run companies that are able to communicate the future of these projects to investors.
So the short answer is, yes, I think there is enough funding. I think where sometimes NASA might get the story a bit wrong is that they think they need to provide all the capital for these programs. And that's not really the case. They need to provide some of the capital. But most importantly, they need to provide the signal. We saw this on launch, right? I mean, NASA didn't fund all of SpaceX's development. They're certainly not funding all of Starship's development. But what they did do is they selected Commercial Cargo and Commercial Crew winners, and then SpaceX is probably the best example of being able to raise capital around that.
Ars: Do you think there are customers beyond NASA for these stations? I'm sure you must. But who are they?
Dylan Taylor: There's huge demand, Eric. Honestly, this has been one of my surprises. Over the last 12 months, and I really want to credit Axiom on this, with the PAM (private astronaut) missions, they really pioneered this notion of sovereign astronauts outside of the ISS consortium. There's huge demand from emerging countries with space agencies that want a sovereign astronaut, that want to send their astronauts to the ISS or to a safe and qualified and NASA-approved space station. So there is a lot of demand there.
We're in active discussions—I would say advanced discussions—with a lot of sovereign astronauts, and I fully anticipate that we're going to be oversubscribed when it comes to astronaut demand. And then on the commercial capacity, on the research side, we see huge demand for our commercial research capacity on Starlab. And just to remind you, we have 100 percent of the research capacity of the ISS, and we see demand in excess of our capacity. We're striking deals as we speak.
Windows takes a backseat on Dell's latest AI workstation as Linux gets the priority:
Dell has a solid track record with Linux-powered OSes, particularly Ubuntu. The company has been shipping developer-focused laptops with Ubuntu pre-installed for years.
Many of their devices come with compatible drivers working out of the box. Audio, Wi-Fi, Thunderbolt ports, and even fingerprint readers mostly work without hassle. My daily workhorse is a Dell laptop that hasn't had a driver-related issue for quite some time now.
And a recent launch just reinforces their Linux approach.
Dell just launched the Pro Max 16 Plus. It is being marketed as the first mobile workstation with an enterprise-grade discrete NPU, the Qualcomm AI 100 PC Inference Card. It packs 64GB of dedicated AI memory and dual NPUs on a single card.
Under the hood, you get Intel Core Ultra processors (up to Ultra 9 285HX), memory up to 256GB CAMM2 at 7200MT/s, GPU options up to NVIDIA RTX PRO 5000 Blackwell with 24GB VRAM, and storage topping out at 12TB with RAID support.
Interestingly, Phoronix has received word that the Windows 11 version of the Dell Pro Max 16 Plus won't ship until early 2026, while the validated Ubuntu 24.04 LTS version is already available.
With this, Dell is targeting professionals who can't rely on cloud inferencing. It says that the discrete NPU keeps data on-device while eliminating cloud latency, enabling work in air-gapped environments, disconnected locations, and compliance-heavy industries.
[Ed. note: NPU is a neural processing unit designed to accelerate AI and machine learning tasks]
Netflix will only support Google Cast on older devices without remotes:
Have you been trying to cast Stranger Things from your phone, only to find that your TV isn't cooperating? It's not the TV—Netflix is to blame for this one, and it's intentional. The streaming app has recently updated its support for Google Cast to disable the feature in most situations. You'll need to pay for one of the company's more expensive plans, and even then, Netflix will only cast to older TVs and streaming dongles.
The Google Cast system began appearing in apps shortly after the original Chromecast launched in 2013. Since then, Netflix users have been able to start video streams on TVs and streaming boxes from the mobile app. That was vital for streaming targets without their own remote or on-screen interface, but times change.
Today, Google has moved beyond the remote-free Chromecast experience, and most TVs have their own standalone Netflix apps. Netflix itself is also allergic to anything that would allow people to share passwords or watch in a new place. Over the last couple of weeks, Netflix updated its app to remove most casting options, mirroring a change in 2019 to kill Apple AirPlay.
The company's support site (spotted by Android Authority) now clarifies that casting is only supported in a narrow set of circumstances. First, you need to be paying for one of the ad-free service tiers, which start at $18 per month. Those on the $8 ad-supported plan won't have casting support.
Even then, casting only appears for devices without a remote, like the earlier generations of Google Chromecasts, as well as some older TVs with Cast built in. For example, anyone still rocking Google's 3rd Gen Chromecast from 2018 can cast video in Netflix, but those with the 2020 Chromecast dongle (which has a remote and a full Android OS) will have to use the TV app. Essentially, anything running Android/Google TV or a smart TV with a full Netflix app will force you to log in before you can watch anything.
[...] Netflix has every reason to want people to log into its TV apps. After years of cheekily promoting password sharing, the company now takes a hardline stance against such things. Requiring people to log into more TVs makes users more likely to hit their screen limits. Netflix will happily sell you a more expensive plan that supports streaming to this new TV, though.
[...] So Netflix may have a good reason to think it can get away with killing casting. However, trying to sneak this one past everyone without so much as an announcement is pretty hostile to its customers.
Unless you have older hardware, you can't cast Netflix to your TV anymore:
Smart TVs have undoubtedly taken over the streaming space, and it's not hard to see why. You download the apps you want to use, log into your accounts, and presto: You can stream anything with a few clicks of your remote.
But smart TV apps aren't the only way people watch shows and movies on platforms like Netflix. Among other methods, like plugging a laptop directly into the TV, many people still enjoy casting their content from small screens to big screens. For years, this has been a reliable way to switch from watching Netflix on your smartphone or tablet to watching on your TV—you just tap the cast button, select your TV, and in a few moments, your content is beamed to the proper place. Your device becomes its own remote, with search built right in, and it avoids the need to sign into Netflix on TVs outside your home, such as when staying in hotels.
At least it did, but Netflix no longer wants to let you do it.
[...] Netflix doesn't explain why it's making the change, so I can only speculate. First, it's totally possible this is simply a tech obsolescence issue. Many companies drop support for older or underused technologies, and perhaps Netflix sees now as the time to largely drop support for casting. Streamlining the tech the app has to support means less work for Netflix developers, and it wouldn't be the first time the company dropped support for older platforms. However, that doesn't really explain why the company still supports some devices for casting. Maybe it took a look at its user base and made the calculation that enough subscribers rely on older Google Cast devices for casting, but not enough rely on newer hardware for casting. We might not really know unless Netflix decides to issue a statement.
That said, I can't help but feel like this is related to Netflix's crackdown on password sharing. The company clearly doesn't want you using its services unless you have your own paid account—or have another user pay extra to have you on their account. Casting, however, makes it easy to continue using someone else's account without paying for it. Since Netflix only requires mobile users to log into the account owner's home wifi once a month to continue watching on a device, you could theoretically cast Netflix from your smartphone to your TV to continue enjoying your shows and movies "for free." By removing casting as an option for most users, those users will either need to connect a device to the TV by wire—like a laptop connected via HDMI—or log into the smart TV app. And if those users don't actually have permission to access that account via that app, they won't be able to stream.
If this really is the company's intention, it's doing so at the inconvenience of paying users, too. If you're traveling, you now need to bother with signing into your account on a TV you don't own. If you don't like using your smart TV apps, you're kind of out of luck, unless you want to deal with connecting a computer to your TV whenever you want to catch up on Stranger Things.
Were any Soylentils doing this?
AI red-teamers in Korea show how easily the model spills dangerous biochemical instructions:
Google's newest and most powerful AI model, Gemini 3, is already under scrutiny. A South Korean AI-security team has demonstrated that the model's safety net can be breached, and the results may raise alarms across the industry.
Aim Intelligence, a startup that tests AI systems for weaknesses, decided to stress-test Gemini 3 Pro and see how far it could be pushed with a jailbreak attack. Maeil Business Newspaper reports that it took the researchers only five minutes to get past Google's protections.
The researchers asked Gemini 3 to provide instructions for making the smallpox virus, and the model responded quickly. It provided many detailed steps, which the team described as "viable."
This was not just a one-off mistake. The researchers went further and asked the model to make a satirical presentation about its own security failure. Gemini replied with a full slide deck called "Excused Stupid Gemini 3."
[...] The AI security testers say this is not just a problem with Gemini. Newer models are becoming so advanced so quickly that safety measures cannot keep up. In particular, these models do not just respond; they also try to avoid detection. Aim Intelligence states that Gemini 3 can use bypass strategies and concealment prompts, rendering simple safeguards far less effective.
[...] If a model strong enough to beat GPT-5 can be jailbroken in minutes, consumers should expect a wave of safety updates, tighter policies, and possibly the removal of some features. AI may be getting smarter, but the defenses protecting users don't seem to be evolving at the same pace.
A blog post covers why datacenters in space are a terrible, horrible, no good idea. Thermal management is just the beginning of the long list of challenges which make space an inferior environment for data centers.
In the interests of clarity, I am a former NASA engineer/scientist with a PhD in space electronics. I also worked at Google for 10 years, in various parts of the company including YouTube and the bit of Cloud responsible for deploying AI capacity, so I'm quite well placed to have an opinion here.
The short version: this is an absolutely terrible idea, and really makes zero sense whatsoever. There are multiple reasons for this, but they all amount to saying that the kind of electronics needed to make a datacenter work, particularly a datacenter deploying AI capacity in the form of GPUs and TPUs, is exactly the opposite of what works in space. If you've not worked specifically in this area before, I'll caution against making gut assumptions, because the reality of making space hardware actually function in space is not necessarily intuitively obvious.
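To put one number on the thermal point alone: in vacuum there is no air cooling, so waste heat leaves only by radiation, governed by the Stefan-Boltzmann law P = εσAT⁴. The sketch below is a rough illustration, not from the blog post; the 1 MW load, 300 K radiator temperature, and 0.9 emissivity are assumed round numbers, and heating from the Sun and Earth is ignored:

    # Rough illustration: radiator area needed to reject heat in vacuum,
    # where radiation is the only path: P = emissivity * sigma * A * T^4.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

    def radiator_area_m2(power_w: float, temp_k: float, emissivity: float = 0.9) -> float:
        """Radiator area for a given heat load, radiating to deep space."""
        return power_w / (emissivity * SIGMA * temp_k**4)

    # An assumed 1 MW of IT load with radiators at 300 K needs roughly
    # 2,400 m^2 of radiator, before accounting for any solar heating.
    print(round(radiator_area_m2(1_000_000, 300)))  # 2419

A terrestrial data center rejects that same megawatt with fans and chillers; in orbit it would need thousands of square meters of deployed radiator, and thermal management is only the first item on the author's list.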
Previously:
(2025) The Data Center Resistance Has Arrived
(2025) Microsoft: the Company Doesn't Have Enough Electricity to Install All the AI GPUs in its Inventory
(2025) China Submerges a Data Center in the Ocean to Conserve Water, is That Even a Good Idea?
(2025) Data Centers Turn to Commercial Aircraft Jet Engines Bolted Onto Trailers as AI Power Crunch Bites
(2025) The Real (Economic) AI Apocalypse is Nigh
(2025) Real Datacenter Emissions Are A Dirty Secret
... and more.
Flock, the automatic license plate reader and AI-powered camera company, uses overseas workers from Upwork to train its machine learning algorithms, with training material telling workers how to review and categorize footage, including images of people and vehicles in the United States, according to material reviewed by 404 Media that was accidentally exposed by the company.
The findings raise questions about who exactly has access to footage collected by Flock surveillance cameras and where the people reviewing that footage may be based. Flock has become a pervasive technology in the US, with its cameras present in thousands of communities, where cops use them every day to investigate things like carjackings. Local police have also performed numerous lookups for ICE in the system.
Companies that use AI or machine learning regularly turn to overseas workers to train their algorithms, often because the labor is cheaper than hiring domestically. But the nature of Flock's business—creating a surveillance system that constantly monitors US residents' movements—means that footage might be more sensitive than other AI training jobs.
Flock's cameras continuously scan the license plate, color, brand, and model of all vehicles that drive by. Law enforcement are then able to search cameras nationwide to see where else a vehicle has driven. Authorities typically dig through this data without a warrant, leading the American Civil Liberties Union and Electronic Frontier Foundation to recently sue a city blanketed in nearly 500 Flock cameras.
Broadly, Flock uses AI or machine learning to automatically detect license plates, vehicles, and people, including what clothes they are wearing, from camera footage. A Flock patent also mentions cameras detecting "race."
The exposed panel included figures on "annotations completed" and "annotator tasks remaining in queue," with annotations being the notes workers add to reviewed footage to help train AI algorithms. Tasks include categorizing vehicle makes, colors, and types, transcribing license plates, and "audio tasks." Flock recently started advertising a feature that will detect "screaming." The panel showed workers sometimes completed thousands upon thousands of annotations over two-day periods.
The exposed panel included a list of people tasked with annotating Flock's footage. Taking those names, 404 Media found some were located in the Philippines, according to their LinkedIn and other online profiles.
Many of these people were employed through Upwork, according to the exposed material. Upwork is a gig and freelance work platform where companies can hire designers and writers or pay for "AI services," according to Upwork's website.
Tipsters who contacted 404 Media also pointed to several publicly available Flock presentations which explained in more detail how workers were to categorize the footage. It is not clear what specific camera footage Flock's AI workers are reviewing. But screenshots included in the worker guides show numerous images from vehicles with US plates, including in New York, Michigan, Florida, New Jersey, and California. Other images include road signs clearly showing the footage is taken from inside the US, and one image contains an advertisement for a specific law firm in Atlanta.
One slide about audio told workers to "listen to the audio all the way through," then select from a drop-down menu including "car wreck," "gunshot," and "reckless driving." Another slide says tire screeching might be associated with someone "doing donuts," and another says that because it can be hard to distinguish between an adult and a child screaming, workers should use a second drop-down menu explaining their confidence in what they heard, with options like "certain" and "uncertain."
Another slide deck explains that workers should not label people inside cars but should label those riding motorcycles or walking.
After 404 Media contacted Flock for comment, the exposed panel was taken offline. Flock then declined to comment.
The KDE project has made the call:
Well folks, it's the beginning of a new era: after nearly three decades of KDE desktop environments running on X11, the future KDE Plasma 6.8 release will be Wayland-exclusive! Support for X11 applications will be fully entrusted to Xwayland, and the Plasma X11 session will no longer be included.
↫ The Plasma Team

They're following in the footsteps of the GNOME project, which will also be leaving the legacy windowing system behind. What this means in practice is that official KDE X11 support will cease once KDE Plasma 6.7 is no longer supported, which should be sometime in early 2026. Do note that the KDE developers intend to release a few extra bugfix releases in the 6.7 release cycle to stabilise the X11 session as much as possible for those people who are going to stick with KDE Plasma 6.7 to keep X11 around.
For people who wish to keep using X11 after that point, the KDE project advises switching to LTS distributions like AlmaLinux, which intends to keep supporting Plasma X11 until 2032. Xwayland will handle virtually all X11 applications running inside the Wayland session, including X11 forwarding; similar functionality exists for native Wayland applications through Waypipe. Also note that this only applies to Plasma as a whole; KDE applications will continue to support X11 when run in other desktop environments or on other platforms.
As for platforms other than Linux – FreeBSD already has relatively robust Wayland support, so if you intend to run KDE on FreeBSD in the near future, you'll have to move over to Wayland there, as well. The other BSD variants are also dabbling with Wayland support, so it won't be long before they, too, will be able to run the KDE Plasma Wayland session without any issues.
What this means is that the two desktop environments that probably make up like 95% of the desktop Linux user base will now be focusing exclusively on Wayland, which is great news. X11 is a legacy platform and aside from retrocomputing and artisanal, boutique setups, you simply shouldn't be using it anymore. Less popular desktop environments like Xfce, Cinnamon, Budgie, and LXQt are also adding Wayland support, so it won't be much longer before virtually no new desktop Linux installations will be using X11.
One X down, one more to go.
The Japanese chipmaker is looking to take on established fabs:
Rapidus, Japan's homegrown challenger to Taiwan Semiconductor Manufacturing Company (TSMC), has announced that it will start building its next-generation 1.4-nanometer fab in fiscal year 2027, with production expected to commence in Hokkaido in 2029. According to Nikkei Asia, the move is expected to help the Japanese chipmaker close the gap with the Taiwanese chip-making giant, which revealed its own 1.4-nm technology earlier this year. The company also said that it will begin full-scale research and development on the node starting next year.
The company is backed by several Japanese companies, including giants such as Toyota and Sony, as well as private financial institutions. Aside from this, the Japanese government has also invested heavily in the startup through subsidies and direct fiscal support. Rapidus has already received a commitment of JPY 1.7 trillion, or more than US$10 billion, with several hundred billion yen expected to be infused into the company in the coming months.
Despite these massive inflows, Rapidus still faces an uphill battle as it competes with established fabs like TSMC, Samsung, and Intel. Intel has already started production of 18A, its 2-nm-class node, while TSMC is moving up plans to produce its latest node at its Arizona site due to strong AI data center demand. The Japanese chipmaker, by contrast, is only expected to begin 2-nm mass production in the latter half of 2027 at its Chitose manufacturing plant. Moreover, all the established foundries struggled with yield issues before they were able to proceed with mass production, suggesting that Rapidus will run into the same problems.
Nevertheless, the company is still intent on pushing forward with its more advanced nodes even though it's playing catch-up with its 2-nm process. Aside from the expected 1.4-nm node that will be produced in the Hokkaido plant, Nikkei Asia also said that more advanced 1-nm chips may also be manufactured at the site.
Rapidus aims to compete against TSMC but has previously said that it's only targeting a handful of companies — around five to ten, initially. The Japanese chipmaker has also claimed that its advanced packaging technique will shorten the production cycle, allowing it to streamline its processes versus its competitors. Nevertheless, former Intel CEO Pat Gelsinger has said that Rapidus needs to offer something more advanced than that to successfully compete with established chip makers.
https://distrowatch.com/dwres.php?resource=showheadline&story=20099
People running the Tumbleweed branch of openSUSE will soon have the chance to try out the distribution's new bootloader package. An openSUSE blog post explains the change:
"openSUSE Tumbleweed recently changed the default boot loader from GRUB2 to GRUB2-BLS when installed via YaST.
This follows the trend started by MicroOS of adopting boot loaders that are compatible with the boot loader specification. MicroOS is using systemd-boot, which is a very small and fast boot loader from the systemd project.
One of the reasons for this change is to simplify the integration of new features. Among them is full disk encryption based on systemd tools, which will make use of TPM2 or FIDO2 tokens if they are available.
What is GRUB2-BLS? GRUB2-BLS is just GRUB2 but with some patches on top ported from the Fedora project, which includes some compatibility for the boot loader specification for Type #1 boot entries. Those are small text files stored in /boot/efi/loader/entries that the boot loader reads to present the initial menu."
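For readers who have not met the Boot Loader Specification, a Type #1 entry is just a small key-value text file that any compliant boot loader can read. A hypothetical /boot/efi/loader/entries/opensuse-tumbleweed.conf might look like this (the kernel version, paths, and UUID are invented for illustration):

    title      openSUSE Tumbleweed
    version    6.11.6-1-default
    linux      /opensuse/6.11.6-1-default/linux
    initrd     /opensuse/6.11.6-1-default/initrd
    options    root=UUID=0a1b2c3d-4e5f-6789-abcd-ef0123456789 rw quiet

Because the boot menu is driven by these drop-in files rather than by a generated grub.cfg, adding or removing a kernel means adding or deleting one entry file.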
The change will allow full disk encryption and do away with some of the GRUB maintenance steps. Details are discussed in the project's blog post.
The reflected infrared light of bone-loving lichen can be detected by drones
Tiny life-forms with bright colors might point the way to big dinosaur bone discoveries.
In the badlands of western Canada, two species of lichen prefer making their homes on dinosaur bones instead of on the surrounding desert rock, and their distinct orange color can be detected by drones, possibly aiding future dino discoveries, researchers report November 3 in Current Biology.
"Rather than finding new sites serendipitously, this approach can help paleontologists to locate new areas that are likely to have fossils at the surface and then go there to investigate," says paleontologist Brian Pickles at the University of Reading in England.
Lichens are photosynthetic organisms built by a symbiotic relationship between fungi and algae or cyanobacteria. They come in many colors. Some are white or near-black; others appear green, yellow, orange or red. They often grow in challenging environments, such as deserts or polar regions.
Lichens tend to be quite picky about where they grow, says AJ Deneka, a lichenologist at Carleton University in Ottawa, Canada, who was not involved with the research. Species that grow on granite do not grow on sandstone or limestone, and species that grow on wood don't grow on rock.
Dinosaur bones covered in lichen have long been known to paleontologists working in desert fossil hotspots of western North America. In 1922, paleontologists found an Ankylosaurus fossil covered in orange lichen in the Canadian badlands. In 1979, a similarly colored lichen was reported growing over a Centrosaurus bonebed in the same area. The orange-colored symbiote is often the first thing researchers notice when working in these regions, with the discovery of bone coming second.
By scrutinizing vibrantly colored lichen and where it grows in Dinosaur Provincial Park in Alberta, Pickles and his colleagues found that two species of lichen, Rusavskia elegans and Xanthomendoza trachyphylla, had a strict preference for colonizing fossil bones and were almost entirely absent from surrounding ironstone rock.
"The porous texture of fossils probably plays a role in making them [a] suitable lichen habitat, perhaps by retaining moisture or providing tiny pockets where lichen [can] become trapped and established," Deneka says.
Pickles and his colleagues next measured light frequencies reflected by the rock, bones and bone-inhabiting lichen and tested whether they could distinguish the lichen from these surroundings using drones. Spectral analyses found the lichen primarily reflected certain infrared light frequencies, which the researchers then used to develop drone sensors that could detect this light from above.
Using these drones, the researchers were able to identify fossil bonebeds from a height of 30 meters. "We could only locate the fossils thanks to the lichen association," Pickles says.
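The paper does not spell out the classifier, but the general remote-sensing recipe, thresholding a band-ratio index computed from the drone's reflectance rasters, can be sketched as below; the band choices and the 0.4 threshold are invented for illustration and are not the study's actual values:

    import numpy as np

    # Hypothetical sketch of spectral-index detection, not the paper's method:
    # compare reflectance in an assumed lichen-diagnostic infrared band
    # against a visible reference band, pixel by pixel.

    def lichen_candidates(ir: np.ndarray, red: np.ndarray,
                          threshold: float = 0.4) -> np.ndarray:
        """Boolean mask of pixels whose normalized band ratio exceeds threshold."""
        index = (ir - red) / (ir + red + 1e-9)  # normalized difference; avoids /0
        return index > threshold

    # Usage, assuming a drone image stored as separate band rasters:
    # mask = lichen_candidates(image[..., IR_BAND], image[..., RED_BAND])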
The technique "has great potential for use in little-explored or difficult-to-access areas," says Renato García, a paleontologist at Universidad Nacional de Avellaneda in Buenos Aires, who was not involved with the research. In 2020, García and his colleagues uncovered a similar predilection of certain lichen toward fossil penguin bones in Antarctica, hinting at another region where this work may be fruitful.
Pickles and his team have their own plan: "Other badlands are our next target."
Journal Reference: Pickles, Brian J. et al. Remote sensing of lichens with drones for detecting dinosaur bones [open]. Current Biology 35(21), R1044–R1045 (2025). https://doi.org/10.1016/j.cub.2025.09.036
During a Dell earnings call, the company mentioned some staggering numbers regarding the number of PCs that will not or cannot be upgraded to Windows 11.
"We have about 500 million of them capable of running Windows 11 that haven't been upgraded," said Dell COO Jeffrey Clarke on a Q3 earnings call earlier this week, referring to the overall PC market, not just Dell's slice of machines. "And we have another 500 million that are four years old that can't run Windows 11." He sees this as an opportunity to guide customers towards the latest Windows 11 machines and AI PCs, but warns that the PC market is going to be relatively flat next year.
↫ Tom Warren at The Verge
The scale of the Windows 10 install base that simply won't or cannot upgrade to Windows 11 is monumental, and it's absolutely bonkers to me that we're mostly just letting Microsoft get away with leaving at least a billion users out in the cold when it comes to security updates and bug fixes. The US government (in better times) and the EU should've 100% forced Microsoft's hand, as leaving this many people on outdated, unsupported operating system installations is several disasters waiting to happen.
Aside from the dangerous position Microsoft is forcing its Windows 10 users into, there's also the massive environmental and public health impact of huge swaths of machines, especially in enterprise environments, becoming obsolete overnight. Many of these will end up in landfills, often shipped to third-world countries so we in the west don't have to deal with our e-waste and its dangerous consequences directly. I can get fined for littering – rightfully so – but when a company like Microsoft makes sweeping decisions which cause untold amounts of dangerous chemicals to be dumped in countless locations all over the globe, governments shrug it off and move on.
At least we will get some cheap eBay hardware out of it, I guess.
https://phys.org/news/2025-11-scientists-mountain-climate-faster-billions.html
Mountains worldwide are experiencing climate change more intensely than lowland areas, with potentially devastating consequences for billions of people who live in and/or depend on these regions, according to a major global review.
The international study, published in Nature Reviews Earth & Environment, examines what scientists call "elevation-dependent climate change" (EDCC)—the phenomenon where environmental changes can accelerate at higher altitudes.
It represents the most thorough analysis to date of how temperature, rainfall, and snowfall patterns are shifting across the world's mountain ranges.
Led by Associate Professor Dr. Nick Pepin from the University of Portsmouth, the research team analyzed data from multiple sources including global gridded datasets, alongside detailed case studies from specific mountain ranges including the Rocky Mountains, the Alps, the Andes, and the Tibetan Plateau.
The findings reveal alarming trends between 1980 and 2020:
- Temperature: Mountain regions, on average, are warming 0.21°C per century faster than surrounding lowlands
- Precipitation and snow: Mountains are experiencing more unpredictable rainfall and a significant change from snow to rain
"Mountains share many characteristics with Arctic regions and are experiencing similarly rapid changes," said Dr. Pepin from the University of Portsmouth's Institute of the Earth and Environment.
"This is because both environments are losing snow and ice rapidly and are seeing profound changes in ecosystems. What's less well known is that as you go higher into the mountains, the rate of climate change can become even more intense."
The implications extend far beyond mountain communities. Over one billion people worldwide depend on mountain snow and glaciers for water, including in China and India—the world's two largest countries by population—which receive water from the Himalayas.
Dr. Pepin added, "The Himalayan ice is decreasing more rapidly than we thought. When you transition from snowfall to rain because it has become warmer, you're more likely to get devastating floods. Hazardous events also become more extreme."
"As temperatures rise, trees and animals are moving higher up the mountains, chasing cooler conditions. But eventually, in some cases, they'll run out of mountain and be pushed off the top. With nowhere left to go, species may be lost and ecosystems fundamentally changed."
Recent events highlight the urgency. Dr. Pepin points to this summer in Pakistan, which experienced some of its deadliest monsoon weather in years, with cloudbursts and extreme mountain rainfall killing over 1,000 people.
This latest review builds on the research team's 2015 paper in Nature Climate Change, which was the first to provide comprehensive evidence that mountain regions were warming more rapidly higher up in comparison to lower down. That study identified key drivers including the loss of snow and ice, increased atmospheric moisture, and aerosol pollutants.
Ten years on, scientists have made progress understanding the controls of such change and the consequences, but the fundamental problem remains.
"The issue of climate change has not gone away," explained Dr. Pepin. "We can't just tackle mountain climate change independently of the broader issue of climate change."
A major obstacle remains the scarcity of weather observations in the mountains. "Mountains are harsh environments, remote, and hard to get to," said Dr. Nadine Salzmann from the WSL Institute for Snow and Avalanche Research SLF in Davos, Switzerland. "Therefore, maintaining weather and climate stations in these environments remains challenging."
This data gap means scientists may be underestimating how quickly temperatures are changing and how fast snow will disappear. The review also calls for better computer models with higher spatial resolution—typically, current models can only resolve changes every few kilometers, but conditions can vary dramatically between slopes just meters apart.
Dr. Emily Potter from the University of Sheffield added, "The good news is that computer models are improving. But better technology alone isn't enough—we need urgent action on climate commitments and significantly improved monitoring infrastructure in these vulnerable mountain regions."
More information: Elevation-dependent climate change in mountain environments, Nature Reviews Earth & Environment (2025). DOI: 10.1038/s43017-025-00740-4
Folks, we have some revolutionary sociological research to share with you today.
After making a guy dressed as Batman stand around in a subway car, a team of researchers found that the behavior of people around him suddenly improved the moment he showed up. No longer was everyone completely self-involved; with the presence of a superhero, commuters started helping each other more than they would've without him around.
Behold: the "Batman effect."
The findings of the unorthodox study, published in the journal npj Mental Health Research, demonstrate the power of introducing something offbeat into social situations to jolt people out of the mental autopilot they slip into to navigate the drudgery of everyday life.
Batman showing up is just one — albeit striking — way of promoting what's called "prosocial behavior," or the act of helping others around you, by introducing an unexpected event, the researchers write.
"Our findings are similar to those of previous research linking present-moment awareness (mindfulness) to greater prosociality," said study lead author Francesco Pagnini, a professor of clinical psychology at the Università Cattolica in Milan, in a statement about the work. "This may create a context in which individuals become more attuned to social cues."
In a series of experiments, the researchers had a woman who visibly appeared pregnant enter a busy train, and observed how often people offered to give up their seats. They then repeated this scenario with a crucial change: when the pregnant woman entered the train from one door, a man dressed as Batman entered from another.
In all, the team observed 138 passengers, and the results were clear-cut. Over 67 percent of passengers offered their seats when Batman was present, compared to just over 37 percent when Batman wasn't there. Most of those who offered, in both cases, were women: 68 percent with Batman there, and 65 percent without him.
But the strangest detail? 44 percent of the people who offered their seats later reported that they didn't even notice Batman was there in the first place, suggesting that people don't need to be consciously aware of the offbeat event itself to, in colloquial terms, pick up the prosocial vibes.
"Unlike traditional mindfulness interventions that require active engagement, this study highlights how situational interruptions alone may be sufficient to produce similar effects," Pagnini said.
In the study, he added, the findings "could inform strategies to promote altruistic behaviors in daily life, from public art installations to innovative social campaigns."
Journal Reference: Pagnini, F., Grosso, F., Cavalera, C. et al. Unexpected events and prosocial behavior: the Batman effect. npj Mental Health Res 4, 57 (2025).
See also: The 'Batman Effect' -- How Having an Alter Ego Empowers You